Some tests with pgpointcloud

  • LAS files are hard to subset and visualise without LAStools, Terrasolid, Bentley Map or similar, which are awesome but costly.
  • Further, LAS files don't let us store arbitrary extra data alongside the points; for example, we already have trouble storing digitised waveforms with the ACT dataset.
  • So what can we do? Here are some experiments with pgpointcloud (PostGIS pointcloud), one approach to point cloud data management.

Some things about this exercise:

  • I am a PostGIS/pgpointcloud n00b. An SQL ninja could probably do this a lot better and a lot faster!
  • The data set used here has ~975 000 000 points in it
  • made from 16 ACT 8pt tiles (no waveform data, because stuff)
  • ingested using a PDAL pipeline (http://pdal.io); see the sketch after this list
  • LAS storage is ~30 GB
  • the pgpointcloud table is ~12 GB, so storage is roughly comparable to .LAZ
  • there are ~195 000 'patches' containing 5 000 points each (the count is confirmed below)
  • patches are the primary query unit, so we index over patch rows, which are arranged in a quasi-space-filling curve
  • there is a tradeoff between the number of points per patch and the number of rows in the DB
  • but generally speaking it scales (nearly... research suggests we will hit a limit before 20 billion points)
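
For reference, ingestion ran through a PDAL pipeline. Below is a minimal sketch using PDAL's Python bindings and the JSON pipeline syntax, not the exact pipeline used for this dataset; the input file name is hypothetical. The capacity option on filters.chipper is what sets the points-per-patch versus rows-in-DB tradeoff mentioned above.

In [ ]:
import pdal

# Sketch only: read one LAS tile, chip it into ~5000-point patches, and
# write the patches to pgpointcloud. writers.pgpointcloud stores patches
# in a column named 'pa' by default, matching the queries below.
pipeline_json = """
{
  "pipeline": [
    "an_act_tile.las",
    {"type": "filters.chipper", "capacity": 5000},
    {
      "type": "writers.pgpointcloud",
      "connection": "dbname=pcbm_pc user=pcbm password=pointy_cloudy host=130.56.244.246",
      "table": "act8pt_16tiles_pc",
      "srid": "28355"
    }
  ]
}
"""

pipeline = pdal.Pipeline(pipeline_json)
pipeline.execute()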

Gathering modules


In [10]:
import psycopg2 as ppg
import numpy as np

from shapely.wkt import loads as wkt_loads
from shapely import speedups

import pandas as pd
import pandas.io.sql as pdsql

import geopandas as gp

from descartes import PolygonPatch

import matplotlib.pyplot as plt
%matplotlib inline

In [11]:
speedups.available


Out[11]:
True

In [12]:
speedups.enable()

Making a postgres connection with psycopg2


In [13]:
#  PGPASSWORD=pointy_cloudy psql -h localhost -d pcbm_pc -U pcbm
pg_connection = "dbname=pcbm_pc user=pcbm password=pointy_cloudy host=130.56.244.246"
conn = ppg.connect(pg_connection)
cursor = conn.cursor()

First query for some blocks - this gets them all!


In [14]:
#blocks_query = "SELECT pa::geometry(Polygon, 28355) AS geom, PC_PatchAvg(pa, 'Z') AS elevation, id FROM act_patches;"

blocks_query = "SELECT st_astext(PC_envelope(pa)::geometry(Polygon, 28355)) AS geom, PC_PatchAvg(pa, 'Z') AS elevation, id FROM act8pt_16tiles_pc;"

%time blks = pdsql.read_sql(blocks_query, conn)


CPU times: user 801 ms, sys: 733 ms, total: 1.53 s
Wall time: 53.9 s

In [15]:
#how many patches?
len(blks)


Out[15]:
195016

In [16]:
#approximate total point count: patches x points-per-patch
len(blks)*5000


Out[16]:
975080000

Ingest data into a GeoPandas frame


In [17]:
blocks_query = "SELECT pa::geometry(Polygon, 28355) AS geom, PC_PatchAvg(pa, 'Z') AS elevation, id FROM act8pt_16tiles_pc where PC_PatchAvg(pa, 'Z') > 600;"

%time thepatches = gp.read_postgis(blocks_query, conn)


CPU times: user 2.87 s, sys: 652 ms, total: 3.53 s
Wall time: 1min 24s

In [18]:
thepatches.head()


Out[18]:
geom elevation id
0 POLYGON ((689254.9500000001 6090148.65, 689254... 601.116304 44216
1 POLYGON ((689276.84 6090181.13, 689276.84 6090... 600.149742 44235
2 POLYGON ((689324.28 6090206.07, 689324.28 6090... 601.294040 44242
3 POLYGON ((689361.39 6090183.93, 689361.39 6090... 601.540860 44293
4 POLYGON ((689386.23 6090184.86, 689386.23 6090... 601.708528 44297

In [19]:
blocks_query = "SELECT pa::geometry(Polygon, 28355) AS geom, PC_PatchAvg(pa, 'Z') AS elevation, id FROM act8pt_16tiles_pc where PC_PatchAvg(pa, 'Z') > 800;"

%time highpatches = gp.read_postgis(blocks_query, conn)


CPU times: user 17.4 ms, sys: 13.1 ms, total: 30.5 ms
Wall time: 46.3 s

Let's map the patches of data we collected - Black Mountain, above 800 m elevation


In [20]:
highpatches.plot(column='elevation', cmap='BrBG')

Out[20]:
<matplotlib.axes._subplots.AxesSubplot at 0x12b4f9710>

Now collect the points from the same region


In [21]:
points_query = "with pts as(SELECT PC_Explode(pa) as pt FROM act8pt_16tiles_pc where PC_PatchAvg(pa, 'Z') > 800 ) select st_astext(pt::geometry) from pts;"

#get raw point data, not as a geometry
#points_query =  "SELECT PC_astext(PC_Explode(pa)) as pt FROM act_patches where PC_PatchAvg(pa, 'Z') > 700 ;"

%time pts = pdsql.read_sql(points_query, conn)

# point storage schema:
# 1 = intens, 2 = ReturnNo, 3 = Numreturns, 4 = scandirectionflag, 5 = edgeofflightline
# 6 = classification (ASPRS), 7 = scananglerank, 8 = user data, 9 = pointsourceID
# 10 = R, 11 = G, 12 = B, 13 = GPSTime, 14 = X, 15 = Y, 16 = Z


CPU times: user 1.69 s, sys: 2.21 s, total: 3.9 s
Wall time: 1min 59s

In [22]:
#how many points did we get?
pts.size


Out[22]:
2614969

In [23]:
#had to check the schema to find point order...
schema_query =  "SELECT * FROM pointcloud_formats where pcid = 4;"

schm = pdsql.read_sql(schema_query, conn)
print(schm.schema)


0    <?xml version="1.0" encoding="UTF-8"?>\n<pc:Po...
Name: schema, dtype: object
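
The schema is an XML document. Something like the following sketch can list the dimension names in storage order; the pc: namespace URI is an assumption here, so check the printed schema if it differs.

In [ ]:
import xml.etree.ElementTree as ET

# List dimension names from the pc:PointCloudSchema XML (namespace assumed).
ns = {'pc': 'http://pointcloud.org/schemas/PC/1.1'}
root = ET.fromstring(schm.schema[0])
print([d.findtext('pc:name', namespaces=ns) for d in root.findall('pc:dimension', ns)])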

In [24]:
pts.head()


Out[24]:
st_astext
0 POINT Z (690512.12 6094324.59 797.18)
1 POINT Z (690512.18 6094324.59 798.41)
2 POINT Z (690516 6094324.59 797.85)
3 POINT Z (690508.06 6094324.59 796.28)
4 POINT Z (690509.03 6094324.59 798.5)

In [25]:
thepoints = []

for point in pts.st_astext:
    this = wkt_loads(point)
    thepoints.append([this.x,this.y,this.z])

In [26]:
thepoints = np.squeeze(thepoints)
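
Pushing a couple of million WKT strings through shapely one at a time is slow. Since every row here has the simple form 'POINT Z (x y z)', a plain string split does the same job; a sketch assuming exactly that form:

In [ ]:
# Hypothetical faster parse: strip the 'POINT Z (' prefix and ')' suffix,
# then split on whitespace.
thepoints = np.array([p[9:-1].split() for p in pts.st_astext], dtype=float)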

Plot the points


In [27]:
plt.scatter(thepoints[:,0], thepoints[:,1], c = thepoints[:,2], lw=0, s=5, cmap='BrBG')


Out[27]:
<matplotlib.collections.PathCollection at 0x12a361198>

Now make a pretty plot - points, patches in the subset, and all the patches in the region


In [28]:
fig = plt.figure()
fig.set_size_inches(25/2.51, 25/2.51)

BLUE = '#6699cc'
RED = '#cc6699'

ax = fig.gca()
ax.scatter(thepoints[:,0], thepoints[:,1], c=thepoints[:,2], lw=0, s=3, cmap='BrBG')

# all the patches in the region (blue edges)
for patch in thepatches.geom:
    ax.add_patch(PolygonPatch(patch, fc=BLUE, ec=BLUE, alpha=0.2, zorder=2))

# the >800 m subset (red edges)
for patch in highpatches.geom:
    ax.add_patch(PolygonPatch(patch, fc=BLUE, ec=RED, alpha=0.2, zorder=2))


Selecting points by classification


In [29]:
#ASPRS class 6 - buildings
bldng_query = "WITH filtered_patch AS (SELECT PC_FilterEquals(pa, 'Classification', 6) as f_patch FROM act8pt_16tiles_pc where PC_PatchAvg(pa, 'Z') > 800) SELECT st_astext(point::geometry) FROM filtered_patch, pc_explode(f_patch) AS point;" 
%time bld_pts = pdsql.read_sql(bldng_query, conn)


CPU times: user 16.8 ms, sys: 16.1 ms, total: 32.9 ms
Wall time: 51.7 s

In [30]:
bld_pts.head()

bldpoints = []

for point in bld_pts.st_astext:
    this = wkt_loads(point)
    bldpoints.append([this.x,this.y,this.z])
bldpoints = np.squeeze(bldpoints)

In [31]:
#ASPRS class 2 - ground
grnd_query = "WITH filtered_patch AS (SELECT PC_FilterEquals(pa, 'Classification', 2) as f_patch FROM act8pt_16tiles_pc where PC_PatchAvg(pa, 'Z') > 800) SELECT st_astext(point::geometry) FROM filtered_patch, pc_explode(f_patch) AS point;" 
%time grnd_pts = pdsql.read_sql(grnd_query, conn)


CPU times: user 368 ms, sys: 486 ms, total: 854 ms
Wall time: 1min 17s

In [32]:
grnd_pts.head()

grndpoints = []

for point in grnd_pts.st_astext:
    this = wkt_loads(point)
    grndpoints.append([this.x,this.y,this.z])
grndpoints = np.squeeze(grndpoints)

In [33]:
#ASPRS class 5 - high vegetation
hv_query = "WITH filtered_patch AS (SELECT PC_FilterEquals(pa, 'Classification', 5) as f_patch FROM act8pt_16tiles_pc where PC_PatchAvg(pa, 'Z') > 800) SELECT st_astext(point::geometry) FROM filtered_patch, pc_explode(f_patch) AS point;" 
%time hv_pts = pdsql.read_sql(hv_query, conn)


CPU times: user 867 ms, sys: 1.1 s, total: 1.97 s
Wall time: 1min 32s

In [34]:
hv_pts.head()

hvpoints = []

for point in hv_pts.st_astext:
    this = wkt_loads(point)
    hvpoints.append([this.x,this.y,this.z])
hvpoints = np.squeeze(hvpoints)
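
The three cells above repeat one query-and-parse pattern; a small helper (a consolidation sketch, not run here) would tidy this up:

In [ ]:
def class_points(conn, asprs_class, min_elev=800):
    """Fetch x,y,z of one ASPRS class from patches averaging above min_elev."""
    q = ("WITH filtered_patch AS ("
         "SELECT PC_FilterEquals(pa, 'Classification', %s) AS f_patch "
         "FROM act8pt_16tiles_pc WHERE PC_PatchAvg(pa, 'Z') > %s) "
         "SELECT st_astext(point::geometry) FROM filtered_patch, "
         "pc_explode(f_patch) AS point;")
    df = pdsql.read_sql(q, conn, params=[asprs_class, min_elev])
    return np.array([[p.x, p.y, p.z] for p in map(wkt_loads, df.st_astext)])

# e.g. bldpoints = class_points(conn, 6)  # ASPRS class 6 - buildings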

In [1]:
fig = plt.figure()
fig.set_size_inches(25/2.51, 25/2.51)

BLUE = '#6699cc'
RED = '#cc6699'

ax = fig.gca() 
ax.scatter(grndpoints[:,0], grndpoints[:,1], c = grndpoints[:,2], lw=0, s=3, cmap='plasma')
ax.scatter(bldpoints[:,0], bldpoints[:,1], c = bldpoints[:,2], lw=0, s=3, cmap='viridis')
ax.scatter(hvpoints[:,0], hvpoints[:,1], c = hvpoints[:,2], lw=0, s=3, cmap='BrBG')

for patch in thepatches.geom:
    ax.add_patch(PolygonPatch(patch, fc=BLUE, ec=BLUE, alpha=0.2, zorder=2 ))
    
for patch in highpatches.geom:
    ax.add_patch(PolygonPatch(patch, fc=BLUE, ec=RED, alpha=0.2, zorder=2 ))
plt.savefig('act_16tile_overhead.png')



Add a 3D plot


In [57]:
#set up for 3d plots
from mpl_toolkits.mplot3d import Axes3D

In [56]:
plt_az = 300
plt_elev = 50.
plt_s = 2

fig = plt.figure()
fig.set_size_inches(30/2.51, 25/2.51)

ax0 = fig.add_subplot(111, projection='3d')
ax0.view_init(elev=plt_elev, azim=plt_az)  # apply the view set up above
ax0.scatter(grndpoints[:,0], grndpoints[:,1], grndpoints[:,2], c=grndpoints[:,2],
            lw=0, s=plt_s, cmap='plasma')
ax0.scatter(bldpoints[:,0], bldpoints[:,1], bldpoints[:,2], c=bldpoints[:,2],
            lw=0, s=plt_s, cmap='viridis')
ax0.scatter(hvpoints[:,0], hvpoints[:,1], hvpoints[:,2], c=hvpoints[:,2],
            lw=0, s=plt_s-1, cmap='BrBG')


Out[56]:
<mpl_toolkits.mplot3d.art3d.Path3DCollection at 0x3120a90b8>

Export to three.js?
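
No firm answer yet, but one plausible route is an ASCII .ply file, which three.js can read with its example PLYLoader. An untested sketch (the writer function and file name are hypothetical; vertices only, no faces):

In [ ]:
def write_ply(fname, pts3):
    # Minimal ASCII PLY writer: header, then one 'x y z' line per vertex.
    with open(fname, 'w') as f:
        f.write('ply\nformat ascii 1.0\n')
        f.write('element vertex {}\n'.format(len(pts3)))
        f.write('property float x\nproperty float y\nproperty float z\n')
        f.write('end_header\n')
        for x, y, z in pts3:
            f.write('{} {} {}\n'.format(x, y, z))

# write_ply('ground_points_800.ply', grndpoints)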


In [31]:
#np.savetxt('ground_points_800.txt', grndpoints, delimiter=',')
#np.savetxt('bld_points_800.txt', bldpoints, delimiter=',')
#np.savetxt('hv_points_800.txt', hvpoints, delimiter=',')

To do:

  • too many things!
  • select from the database by:
    • class (demonstrated here)
    • height above ground (needs PDAL and PCL integration)
    • tree cover
    • intersection with object geometries (see the sketch below)
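
On that last item: pgpointcloud's pointcloud_postgis extension provides PC_Intersects(patch, geometry), so a patch-level spatial select could look like the sketch below. The AOI polygon coordinates are invented.

In [ ]:
# Select points from patches that intersect an area of interest (sketch).
aoi = ("SRID=28355;POLYGON((689000 6090000, 689500 6090000, "
       "689500 6090500, 689000 6090500, 689000 6090000))")
ix_query = ("WITH pts AS (SELECT PC_Explode(pa) AS pt FROM act8pt_16tiles_pc "
            "WHERE PC_Intersects(pa, ST_GeomFromEWKT(%s))) "
            "SELECT st_astext(pt::geometry) FROM pts;")
# aoi_pts = pdsql.read_sql(ix_query, conn, params=[aoi])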

All very cool but why?

National elevation maps: storing and managing many billions of points as a coherent dataset for precise elevation estimation. We also aim to store provenance, because if there's a data issue we need more than just points; we need to figure out why the issue occurred and fix it. We can also store point accuracy, QC metrics, whatever point attributes we like! Or points from many sources:

  • airborne LiDAR
  • terrestrial scanners
  • 3D photogrammetry
  • geophysical datasets (already stored as points in netCDF)
  • output of discrete element models (e.g. Kool et al., or new sea ice models in development)
